Petascale Computing for Future Breakthroughs in Global Seismology
Authors
Abstract
Will the advent of “petascale” computers be relevant to research in global seismic tomography? We illustrate here in detail two possible consequences of the expected leap in computing capability. First, the ability to identify larger sets of differently regularized/parameterized solutions in shorter times will make it possible to evaluate their relative quality by more accurate statistical criteria than in the past. Second, it will become possible to compile large databases of sensitivity kernels, and to update them efficiently in a non-linear inversion while iterating towards an optimal solution. We quantify the expected computational cost of the above endeavors as a function of model resolution and of the highest considered seismic-wave frequency.
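The dependence of cost on the highest considered frequency can be illustrated with a minimal sketch (not taken from the paper): for 3-D wave-propagation simulations, compute cost grows roughly as the fourth power of the highest resolved frequency, since spatial resolution contributes a factor of f in each of three dimensions and the stability (CFL) condition forces a time step proportional to 1/f. The function name and reference frequency below are illustrative assumptions.

```python
def relative_cost(f_max, f_ref=0.05):
    """Relative compute cost of simulating wave propagation up to
    f_max Hz compared with a reference frequency f_ref Hz.

    Rough scaling argument: cost ~ f^3 (spatial grid, 3-D) * f (time
    stepping, CFL condition) = f^4. Constants cancel in the ratio.
    """
    return (f_max / f_ref) ** 4

# Doubling the highest resolved frequency multiplies the cost by 2^4 = 16.
print(relative_cost(0.1))  # → 16.0
```

This fourth-power scaling is why even a petascale machine buys only a modest increase in the highest usable frequency, and why reusable databases of sensitivity kernels are attractive.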
Similar resources
Adjusting process count on demand for petascale global optimization
Abstract: There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the tota...
Programming Challenges for Petascale and Multicore Parallel Systems
This decade marks a resurgence for parallel computing with high-end systems moving to petascale and mainstream systems moving to multi-core processors. Unlike previous generations of hardware evolution, this shift will have a major impact on existing software. For petascale, it is widely recognized by application experts that past approaches based on domain decomposition will not scale to explo...
Checkpointing vs. Migration for Post-Petascale Machines
We craft a few scenarios for the execution of sequential and parallel jobs on future-generation machines. Checkpointing or migration: which technique should one choose?
Optimization of Docking Conformations Using Grid Datafarm
Grid Datafarm (GFarm) is a Japanese national project that aims to design an infrastructure for global petascale data intensive computing. GFarm tools and APIs are provided to handle large data files in both single filesystem image and local file views. While the Grid Datafarm is originally motivated by high energy physics applications, it is a generic distributed I/O management and scheduling i...
Worldwide Fast File Replication on Grid Datafarm
The Grid Datafarm architecture is designed for global petascale data-intensive computing. It provides a global parallel filesystem with online petascale storage, scalable I/O bandwidth, and scalable parallel processing, and it can exploit local I/O in a grid of clusters with tens of thousands of nodes. One of its features is that it manages file replicas in filesystem metadata for fault tolerance a...
Journal:
Volume, Issue:
Pages: -
Publication date: 2007